To address vaccine hesitancy, which impairs the efforts of COVID-19 vaccination campaigns, it is imperative to understand public vaccination attitudes and grasp their changes in a timely manner. Despite its reliability and trustworthiness, traditional survey-based attitude collection is time-consuming and expensive and cannot keep up with the fast evolution of vaccination attitudes. We propose a deep-learning framework that leverages textual posts on social media to extract and track users' vaccination stances in near real time. To counter the impact of linguistic features such as sarcasm and irony, which are common in vaccine-related discourse, we integrate the recent posts of a user's social-network neighbours into the framework to help detect the user's genuine attitude. On our annotated dataset from Twitter, models instantiated from our framework improve attitude-extraction performance by up to 23% over state-of-the-art text-only models. Using this framework, we validate the feasibility of using social media to track the evolution of real-life vaccination attitudes. We further demonstrate one practical use of our framework: predicting the likelihood of changes in a user's vaccine hesitancy from information perceived on social media.
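The abstract's key architectural idea, combining a user's own post with an aggregate of their network neighbours' recent posts, can be illustrated with a minimal sketch. The toy hash-based encoder, the mean-pooling of neighbour posts, and the linear stance head are all illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(text: str, dim: int = 16) -> np.ndarray:
    # Toy stand-in for a learned text encoder: hash tokens into a bag-of-words vector.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / max(1.0, np.linalg.norm(v))

def stance_features(user_post: str, neighbour_posts: list) -> np.ndarray:
    # Key idea: concatenate the user's own post embedding with an aggregate
    # (here: mean) of the neighbours' recent posts, so sarcastic or ironic posts
    # can be disambiguated by the surrounding social context.
    own = embed(user_post)
    ctx = (np.mean([embed(p) for p in neighbour_posts], axis=0)
           if neighbour_posts else np.zeros_like(own))
    return np.concatenate([own, ctx])

# Hypothetical linear stance head over the joint features (untrained here).
W = rng.normal(size=(3, 32))  # 3 stances: pro / neutral / anti

def predict_stance(user_post: str, neighbour_posts: list) -> int:
    logits = W @ stance_features(user_post, neighbour_posts)
    return int(np.argmax(logits))
```

In the actual framework the encoder and stance head would be trained end to end on the annotated tweets; the sketch only shows where the neighbour context enters the feature vector.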
Vaccine hesitancy is considered one of the main causes of the stagnant vaccine uptake in Europe and the US, where vaccines are sufficiently supplied. Grasping public attitudes towards vaccination quickly and accurately is critical to addressing vaccine hesitancy, and social media platforms have proved to be an effective source of public opinion. In this paper, we describe the collection and release of a dataset of tweets related to COVID-19 vaccines. The dataset consists of 2,198,090 tweets collected from Western Europe, 17,934 of which are annotated with the originators' vaccination stances. Our annotation will facilitate the use and development of data-driven models for extracting vaccination attitudes from social media posts, further confirming the power of social media in public health surveillance. To lay the groundwork for future research, we not only perform statistical analysis and visualisation of the dataset, but also evaluate and compare the performance of established text-based benchmarks for vaccination stance extraction. We demonstrate one potential use of our data in practice: tracking the temporal changes of public COVID-19 vaccination attitudes.
Robots need multiple interaction modes to collaborate robustly with humans in complex industrial tasks. We develop a Coexistence-and-Cooperation (CoCo) human-robot collaboration system. The coexistence mode enables the robot to work independently alongside humans on different sub-tasks in a shared space. The cooperation mode enables the robot to follow human guidance and recover from failures. A human intention tracking algorithm takes human and robot motion measurements as input and provides the switch between interaction modes. We demonstrate the effectiveness of the CoCo system in a use case resembling a real-world multi-step assembly task.
Collaborative robots require effective human intention estimation to work with humans safely and smoothly in structured tasks such as industrial assembly, where human intentions continuously change. We propose the concept of intention tracking and introduce a collaborative robot system that tracks intentions at hierarchical levels simultaneously. The high-level intention is tracked to estimate the human's interaction pattern, enabling the robot to (1) avoid collision with the human to minimise interruptions or (2) assist the human in correcting failures. The low-level intention estimate provides the robot with task-specific information for concurrent execution. We implement the system on a UR5e robot and demonstrate robust, seamless, and ergonomic human-robot collaboration in an ablative pilot study of an assembly use case.
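Tracking intentions at two hierarchical levels from motion measurements can be sketched as a pair of discrete Bayes filters, one over interaction patterns and one over sub-tasks. The state spaces, transition matrices, and likelihoods below are hypothetical placeholders; the abstract does not specify the paper's actual estimator:

```python
import numpy as np

def bayes_update(belief: np.ndarray, likelihood: np.ndarray,
                 transition: np.ndarray) -> np.ndarray:
    """One discrete Bayes-filter step: predict with the transition model,
    then correct with the likelihood of the latest motion measurement."""
    predicted = transition @ belief        # prediction step
    posterior = likelihood * predicted     # correction step
    return posterior / posterior.sum()     # renormalise

# Hypothetical high level: interaction patterns {0: working independently, 1: needs help}.
high_belief = np.array([0.5, 0.5])
high_T = np.array([[0.95, 0.05],
                   [0.05, 0.95]])          # patterns are sticky over time

# Hypothetical low level: which of three sub-tasks the human is performing.
low_belief = np.ones(3) / 3
low_T = np.full((3, 3), 0.1) + 0.7 * np.eye(3)

# A motion measurement arrives; each level evaluates its own likelihood of it.
high_belief = bayes_update(high_belief, np.array([0.8, 0.2]), high_T)
low_belief = bayes_update(low_belief, np.array([0.1, 0.7, 0.2]), low_T)
```

The high-level posterior would drive the mode switch (avoid vs. assist), while the low-level posterior tells the robot which sub-task to support concurrently.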
Haptic feedback can improve safety of teleoperated robots when situational awareness is limited or operators are inattentive. Standard potential field approaches increase haptic resistance as an obstacle is approached, which is desirable when the operator is unaware of the obstacle but undesirable when the movement is intentional, such as when the operator wishes to inspect or manipulate an object. This paper presents a novel haptic teleoperation framework that estimates the operator's attentiveness to dampen haptic feedback for intentional movement. A biologically-inspired attention model is developed based on computational working memory theories to integrate visual saliency estimation with spatial mapping. This model generates an attentiveness map in real-time, and the haptic rendering system generates lower haptic forces for obstacles that the operator is estimated to be aware of. Experimental results in simulation show that the proposed framework outperforms haptic teleoperation without attentiveness estimation in terms of task performance, robot safety, and user experience.
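The core mechanism, a repulsive potential-field force damped by estimated operator attentiveness, can be sketched in a few lines. The classic repulsive-gradient form and the linear damping factor are illustrative assumptions, not the paper's haptic rendering model:

```python
def haptic_force(dist: float, attentiveness: float,
                 d_influence: float = 1.0, k: float = 5.0) -> float:
    """Potential-field repulsive force, damped by estimated operator attentiveness.

    attentiveness in [0, 1]: 0 = operator unaware of the obstacle (full feedback),
    1 = fully aware (feedback suppressed so intentional motion is not resisted).
    """
    if dist >= d_influence:
        return 0.0  # obstacle outside the influence region exerts no force
    base = k * (1.0 / dist - 1.0 / d_influence)  # standard repulsive-potential gradient
    return base * (1.0 - attentiveness)          # dampen for obstacles the operator sees
```

At equal distance, an obstacle the attentiveness map marks as "seen" produces a much weaker resistance, which is exactly the behaviour the abstract describes for intentional inspection or manipulation motions.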
Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple different domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidation of information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on two datasets with different organs and modalities, where it substantially outperforms existing techniques.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB was built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when introducing multiple relations. By analyzing the experiment results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
As one of the prevalent methods to achieve automation systems, Imitation Learning (IL) presents a promising performance in a wide range of domains. However, despite the considerable improvement in policy performance, the corresponding research on the explainability of IL models is still limited. Inspired by recent approaches in explainable artificial intelligence, we propose a model-agnostic explanation framework for IL models called R2RISE. R2RISE aims to explain the overall policy performance with respect to the frames in demonstrations. It iteratively retrains the black-box IL model from randomly masked demonstrations and uses the conventional evaluation outcome, environment returns, as the coefficient to build an importance map. We also conducted experiments to investigate three major questions: whether frames are equally important, the effectiveness of the importance map, and the connections between importance maps from different IL models. The results show that R2RISE successfully distinguishes important frames from the demonstrations.
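The masking-and-retraining loop described above can be sketched in the spirit of R2RISE: sample random binary masks over demonstration frames, retrain and evaluate the black-box policy on each masked demonstration, and build the importance map as the return-weighted average of the masks. The function signature and the `train_and_eval` callback are hypothetical stand-ins for the actual IL pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def r2rise_importance(demo_len: int, train_and_eval,
                      n_iter: int = 200, p_keep: float = 0.5) -> np.ndarray:
    """Frame-importance estimate via randomized masking, R2RISE-style.

    `train_and_eval(mask)` is assumed to retrain the black-box IL policy on the
    demonstration frames where mask == 1 and return the resulting environment
    return. Frames whose presence correlates with high returns score high.
    """
    importance = np.zeros(demo_len)
    weight = 0.0
    for _ in range(n_iter):
        mask = (rng.random(demo_len) < p_keep).astype(float)  # random frame subset
        ret = train_and_eval(mask)                            # retrain + evaluate
        importance += ret * mask                              # return-weighted masks
        weight += ret
    return importance / max(weight, 1e-8)
```

With a synthetic `train_and_eval` whose return depends only on a few frames, the estimated map concentrates on exactly those frames, which mirrors the paper's claim that R2RISE distinguishes important frames.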
Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical to improving visual quality. In this paper, we investigate the influence of four spatial PEAs (i.e. blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e. flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with a low computational cost and higher consistency with human visual perception. For temporal artifacts, we improve the self-attention-based TimeSformer to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
We propose a distributionally robust return-risk model for Markov decision processes (MDPs) under risk and reward ambiguity. The proposed model optimizes the weighted average of mean and percentile performances, and it covers the distributionally robust MDPs and the distributionally robust chance-constrained MDPs (both under reward ambiguity) as special cases. By considering that the unknown reward distribution lies in a Wasserstein ambiguity set, we derive a tractable reformulation of our model. In particular, we show that the return-risk model can also account for risk from an uncertain transition kernel when one only seeks deterministic policies, and that a distributionally robust MDP under the percentile criterion can be reformulated as its nominal counterpart at an adjusted risk level. A scalable first-order algorithm is designed to solve large-scale problems, and we demonstrate the advantages of our proposed model and algorithm through numerical experiments.
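The objective the abstract describes, a weighted average of mean and percentile performance, can be illustrated on a sample of returns. This empirical sketch is only the nominal (sample-based) criterion; it does not implement the Wasserstein ambiguity set or the paper's tractable reformulation:

```python
import numpy as np

def return_risk_objective(returns: np.ndarray,
                          lam: float = 0.5, alpha: float = 0.1) -> float:
    """Weighted average of mean performance and an alpha-percentile (VaR-type) criterion.

    lam in [0, 1] trades off average return against the alpha-percentile of the
    return distribution: lam = 1 recovers the risk-neutral mean, lam = 0 the
    pure percentile criterion the abstract mentions as a special case.
    """
    mean_part = float(np.mean(returns))
    percentile_part = float(np.quantile(returns, alpha))  # alpha-percentile of returns
    return lam * mean_part + (1.0 - lam) * percentile_part
```

Maximizing this objective over policies interpolates between risk-neutral and percentile-robust behaviour; the distributionally robust version would additionally take a worst case over reward distributions near the empirical one.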